Exploring Users' Pointing Performance on Large Displays with Different Curvatures in Virtual Reality
Large curved displays inside Virtual Reality environments are becoming
popular for visualizing high-resolution content during analytical tasks, gaming
or entertainment. Prior research showed that such displays provide a wide field
of view and offer users a high level of immersion. However, little is known
about users' performance (e.g., pointing speed and accuracy) on them. We
explore users' pointing performance on large virtual curved displays. We
investigate standard pointing factors (e.g., target width and amplitude) in
combination with relevant curve-related factors, namely display curvature and
both linear and angular measures. Our results show that the less curved the
display, the higher the performance, i.e., faster movement time. This result
holds for pointing tasks controlled via their visual properties (linear widths
and amplitudes) or their motor properties (angular widths and amplitudes).
Additionally, display curvature significantly affects the error rate in both
linear and angular conditions. Furthermore, throughput analysis shows that
curved displays perform better than or similarly to flat displays.
Finally, we discuss our results and provide suggestions regarding pointing
tasks on large curved displays in VR.
Comment: IEEE Transactions on Visualization and Computer Graphics (2023)
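The pointing factors named in this abstract (target width, amplitude, movement time, and throughput) follow the standard Fitts's-law formulation. As an illustrative sketch only (not the paper's code; the example numbers are made up), the Shannon index of difficulty and throughput can be computed as:

```python
import math

def index_of_difficulty(amplitude, width):
    """Shannon formulation of Fitts's index of difficulty, in bits."""
    return math.log2(amplitude / width + 1)

def throughput(amplitude, width, movement_time):
    """Throughput in bits/s: index of difficulty over movement time (s)."""
    return index_of_difficulty(amplitude, width) / movement_time

# Hypothetical trial: a 10 cm target at 80 cm amplitude, reached in 1.2 s
id_bits = index_of_difficulty(80, 10)  # log2(9) ~ 3.17 bits
tp = throughput(80, 10, 1.2)           # ~ 2.64 bits/s
```

On curved displays, the abstract distinguishes linear (visual) from angular (motor) widths and amplitudes; the same formulas apply with either measure substituted for `amplitude` and `width`.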
Exploring Users' Pointing Performance on Virtual and Physical Large Curved Displays
Large curved displays have emerged as a powerful platform for collaboration,
data visualization, and entertainment. These displays provide highly immersive
experiences, a wider field of view, and higher satisfaction levels. Yet, large
curved displays are not commonly available due to their high costs. With the
recent advancement of Head Mounted Displays (HMDs), large curved displays can
be simulated in Virtual Reality (VR) with minimal cost and space requirements.
However, to consider the virtual display as an alternative to the physical
display, it is necessary to uncover user performance differences (e.g.,
pointing speed and accuracy) between these two platforms. In this paper, we
explored users' pointing performance on both physical and virtual large curved
displays. Specifically, with two studies, we investigate users' performance
between the two platforms for standard pointing factors such as target width
and target amplitude, as well as users' position relative to the screen. Results
from user studies reveal no significant difference in pointing performance
between the two platforms when users are located at the same position relative
to the screen. In addition, we observe users' pointing performance improves
when they are located at the center of a semi-circular display compared to
off-centered positions. We conclude by outlining design implications for
pointing on large curved virtual displays. These findings show that large
curved virtual displays are a viable alternative to physical displays for
pointing tasks.
Comment: In 29th ACM Symposium on Virtual Reality Software and Technology
(VRST 2023)
MultiFingerBubble: A 3D Bubble Cursor Variation for Dense Environments
In this work, we propose MultiFingerBubble, a new variation of the 3D Bubble Cursor. The 3D Bubble Cursor is sensitive to distractors in dense environments: the selection volume resizes to snap to nearby targets. To prevent the cursor from constantly re-snapping to neighboring targets, MultiFingerBubble includes multiple targets in the selection volume, and hence increases the targets' effective width. Each target in the selection volume is associated with a specific finger. Users can then select a target by flexing the corresponding finger. We report on a controlled in-lab experiment exploring various design options regarding the number of fingers to use, the target-to-finger mapping, and its visualization. Our study results suggest that MultiFingerBubble is best used with three fingers and colored lines to reveal the mapping between targets and fingers.
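The snapping behavior described above amounts to a nearest-target query: a classic bubble cursor snaps to the single closest target, while MultiFingerBubble instead captures the k nearest targets and maps each to a finger. A minimal sketch of that idea (not the authors' implementation; the finger ordering, coordinates, and function names are hypothetical):

```python
import math

def k_nearest_targets(cursor, targets, k=3):
    """Bubble-style volume selection: capture the k targets closest to the
    cursor (k=1 recovers the classic single-target bubble cursor snap)."""
    return sorted(targets, key=lambda t: math.dist(cursor, t))[:k]

# Hypothetical 3D scene: cursor at the origin, four candidate targets
fingers = ["index", "middle", "ring"]  # assumed finger order
cursor = (0.0, 0.0, 0.0)
targets = [(1, 0, 0), (0, 3, 0), (0, 0, 2), (5, 5, 5)]

selection = k_nearest_targets(cursor, targets, k=3)
mapping = dict(zip(fingers, selection))  # each captured target gets a finger
```

With three fingers (the configuration the study found best), the farthest distractor stays outside the selection, and flexing a finger disambiguates among the three captured targets without the cursor re-snapping.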
A-Coord Input: Coordinating Auxiliary Input Streams for Augmenting Contextual Pen-Based Interactions
The human hand can naturally coordinate multiple finger joints, and simultaneously tilt, press, and roll a pen to write or draw. For this reason, digital pens are now embedded with auxiliary input sensors to capture these actions. Prior research on auxiliary input channels has mainly investigated them in isolation from one another. In this work, we explore the coordinated use of two auxiliary channels, a class of interaction techniques we refer to as a-coord input. Through two separate experiments, we explore the design space of a-coord input. In the first study, we identify whether users can successfully coordinate two auxiliary channels, and find a strong degree of coordination between them. In the second experiment, we evaluate the effectiveness of a-coord input in a task with multiple steps, such as multi-parameter selection and manipulation. We find that a-coord input facilitates coordination even in a complex, multi-step sequential task. Overall, our results indicate that users can control at least two auxiliary input channels in conjunction, which can facilitate a number of common pen-based tasks.
Author Keywords: Pen-based interaction; pen roll; pen pressure; pen tilt; dual-channel input.
Preferences for Mobile-Supported e-Cigarette Cessation Interventions Among Young Adults: Qualitative Descriptive Study
Background: Despite the steady rise in electronic cigarette (e-cigarette) uptake among young adults, more and more young people want to quit. Given the popularity of smartphones among young adults, mobile-based e-cigarette cessation interventions hold significant promise. Smartphone apps are particularly promising due to their varied and complex capabilities for engaging end users. However, young adults' preferences and expectations for an e-cigarette cessation smartphone app remain unexplored.
Objective: The purpose of this study was to take an initial step toward understanding young adults' preferences and perceptions of app-based e-cigarette cessation interventions.
Methods: Using a qualitative descriptive approach, we interviewed 12 young adults who used e-cigarettes and wanted to quit. We inductively derived themes using the framework analysis approach and NVivo 12 qualitative data analysis software.
Results: All participants agreed that a smartphone app for supporting cessation was desirable. In addition, we found 4 key themes related to their preferences for app components: (1) flexible personalization (being able to enter and modify goals); (2) e-cigarette behavior tracking (progress and benefits of quitting); (3) safely managed social support (moderated and anonymous); and (4) positively framed notifications (encouraging and motivational messages). Some gender-based differences indicate that women were more likely to use e-cigarettes to cope with stress, preferred more aesthetic tailoring in the app, and were less likely to quit cold turkey compared with men.
Conclusions: The findings provide direction for the development and testing of app-based e-cigarette cessation interventions for young adults.
Smartwatches + Head-Worn Displays: the "New" Smartphone
We are exploring whether two currently mass-marketed wearable devices, the smartwatch (SW) and the head-worn display (HWD), can replace and go beyond the capabilities of the mobile smartphone. While smartphones have become indispensable in our everyday activities, they do not possess the same level of integration that wearable devices afford. To explore whether and how smartphones can be replaced with a new form factor, we present methods for resolving the limited input and display capabilities of wearables to achieve this vision. These devices are currently designed in isolation from one another, and it is as yet unclear how multiple devices will coexist in a wearable ecosystem. We discuss how this union addresses the limitations of each device by expanding the available interaction space, increasing the availability of information, and mitigating occlusion. We propose a design space for joint interactions and illustrate it with several techniques.
Exploring Social Acceptability and Users’ Preferences of Head- and Eye-Based Interaction with Mobile Devices
Advancements in eye-tracking technology have compelled researchers to explore potential eye-based interactions with diverse devices. Though many commercial devices are now equipped with eye-tracking solutions (e.g., HTC VIVE Pro), little is known about users' social acceptance and preferences regarding eye-based interaction techniques, especially with smartphones. We report on three studies exploring users' social acceptance and preferences regarding different head- and eye-based inputs with smartphones. Study results show that eye movements are more socially acceptable than other head- and eye-based techniques due to their subtle nature. Based on these findings, we further examine users' preferences regarding saccade and pursuit eye movements. Results reveal users' preference for saccades over pursuits. In a third study exploring delimiting actions to discriminate between intentional and unintentional eye inputs, Dwell emerged as the preferred delimiter, both in public and private spaces. We conclude with design guidelines for eye-based interactions on smartphones.